A private web server will not respond to requests from public search engines.


Overview

Finding ID: V-2260
Version: WG310
Rule ID: SV-2260r5_rule
IA Controls: ECLP-1
Severity: Low
Description
Search engines are constantly at work on the Internet. Search engines are augmented by agents, often referred to as spiders or bots, which endeavor to capture and catalog web site content. In turn, these search engines make the content they obtain and catalog available to any public web user. Such information in the public domain defeats the purpose of a Limited or Certificate-based web server, provides information to those not authorized access to the web site, and could provide clues of the site’s architecture to malicious parties.
STIG: Web Server STIG
Date: 2010-10-07

Details

Check Text (C-29395r1_chk)
This requirement only applies to private web servers.

Query the SA to determine what type of restriction from public search engines is in place.

The use of one or more of the following restrictions will satisfy this requirement (a verification sketch for restrictions 1 through 4 follows the list):

1. IP address restrictions
2. User IDs and passwords
3. DoD PKI authentication
4. Domain restrictions
5. Implementation of a robots.txt defense
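
For restrictions 1 through 4, a reviewer can spot-check enforcement by sending an unauthenticated request and confirming it is rejected. The following is a minimal sketch in Python, not part of the STIG itself; the hostname is a placeholder for the private web server under review, and it assumes anonymous requests are answered with 401/403 or refused outright:

import urllib.request
import urllib.error

URL = "https://private.example.mil/"  # placeholder for the private web server under review

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        # A successful anonymous response suggests no access restriction is enforced.
        print("Anonymous request succeeded (HTTP %d) - possible finding" % resp.status)
except urllib.error.HTTPError as err:
    # 401/403 responses indicate the server is demanding credentials or denying the client.
    print("Anonymous request rejected (HTTP %d) - a restriction appears to be in place" % err.code)
except urllib.error.URLError as err:
    # A refused or filtered connection may indicate IP address or domain restrictions.
    print("Connection failed (%s) - access may be restricted at the network level" % err.reason)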

To implement a robots.txt defense:
Place a file named robots.txt into the document root directory. The content of this file should include:

User-agent: *
Disallow: /
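
Whether a deployed robots.txt actually contains these directives can be spot-checked over HTTP. A minimal sketch in Python, again using a placeholder hostname:

import urllib.request

ROBOTS_URL = "https://private.example.mil/robots.txt"  # placeholder hostname

with urllib.request.urlopen(ROBOTS_URL, timeout=10) as resp:
    body = resp.read().decode("utf-8", errors="replace")

# The required directives disallow every path for every user agent.
if "User-agent: *" in body and "Disallow: /" in body:
    print("robots.txt disallows all agents - defense appears to be in place")
else:
    print("robots.txt is missing the required directives - possible finding")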

The use of any additional vendor-supported methodologies is encouraged.

If access to a private web server by public search engine agents is not restricted, this is a finding.
Fix Text (F-26865r1_fix)
Employ one or more of the following restrictions:

robots.txt file
DoD PKI authentication
User ID and Password
IP Address restrictions
Domain restrictions

Employ the robots.txt defense:

In the document root directory, include a file named robots.txt that contains at least the following content to disallow any access from robots:

User-agent: *
Disallow: /
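
As an illustration only, the file could be created with a short script such as the following; the document root path is a placeholder and varies by web server and platform:

from pathlib import Path

# Placeholder path - substitute the document root of the web server in use.
DOCUMENT_ROOT = Path("/var/www/html")

# The content required by the fix text: disallow every path for every user agent.
ROBOTS_TXT = "User-agent: *\nDisallow: /\n"

(DOCUMENT_ROOT / "robots.txt").write_text(ROBOTS_TXT)
print("Wrote", DOCUMENT_ROOT / "robots.txt")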